

Wasserstein distance


On the Convergence and Robustness of Training GANs with Regularized Optimal Transport

Neural Information Processing Systems

Generative Adversarial Networks (GANs) are among the most practical methods for learning data distributions. A popular GAN formulation is based on the use of the Wasserstein distance as a metric between probability distributions. Unfortunately, minimizing the Wasserstein distance between the data distribution and the generative model distribution is a computationally challenging problem, as its objective is non-convex, non-smooth, and even hard to compute. In this work, we show that obtaining gradient information for the smoothed Wasserstein GAN formulation, which is based on regularized Optimal Transport (OT), is computationally cheap, and hence one can apply first-order optimization methods to minimize this objective. Consequently, we establish a theoretical convergence guarantee to stationarity for a proposed class of GAN optimization algorithms. Unlike the original non-smooth formulation, our algorithm only requires solving the discriminator problem to approximate optimality. We apply our method to learning MNIST digits as well as CIFAR-10 images. Our experiments show that our method is computationally efficient and generates images comparable to those of state-of-the-art algorithms given the same architecture and computational power.
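
The computational object behind this smoothed formulation is the entropy-regularized OT cost, which can be computed by Sinkhorn matrix scaling. Below is a minimal NumPy sketch of Sinkhorn iterations between two sample batches; the squared-Euclidean cost, batch sizes, and regularization strength eps are illustrative assumptions, not the paper's settings (for very small eps, a log-domain stabilized variant would be needed).

```python
# A minimal NumPy sketch of entropy-regularized optimal transport computed
# with Sinkhorn iterations. Cost function, batch sizes, and eps are
# illustrative assumptions, not the paper's settings.
import numpy as np

def sinkhorn_cost(x, y, eps=0.5, n_iter=200):
    """Entropy-regularized OT cost between two uniform point clouds."""
    n, m = len(x), len(y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)            # uniform marginals
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # cost matrix
    K = np.exp(-C / eps)                                       # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                                    # matrix scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                            # transport plan
    return np.sum(P * C)                                       # transport cost

rng = np.random.default_rng(0)
data_batch = rng.normal(size=(64, 2))             # stand-in for real data
model_batch = rng.normal(loc=1.0, size=(64, 2))   # stand-in for generator samples
print(sinkhorn_cost(data_batch, model_batch))
```

In a GAN training loop this smoothed cost would be differentiated with respect to the generator parameters (for example, via an autodiff framework); the point of the abstract is that this gradient information is cheap to obtain relative to the unregularized Wasserstein objective.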







Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee

Yuanshi Liu, Cong Fang, Tong Zhang, School of Intelligence Science and Technology, Peking University

Neural Information Processing Systems

Sampling from a high-dimensional distribution is a key component of statistics, machine learning, and scientific computing, and constitutes the foundation of fields including Bayesian statistics and generative models [Liu and Liu, 2001, Brooks et al., 2011, Song et al.,
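
For context, the dynamics named in the title can be sketched as follows. This is a plain Euler-Maruyama discretization of underdamped Langevin dynamics for a standard Gaussian target, not the paper's double-randomized scheme; the step size h, friction gamma, and the target are illustrative assumptions.

```python
# A minimal NumPy sketch of underdamped Langevin dynamics discretized with
# Euler-Maruyama, targeting a standard Gaussian (U(x) = ||x||^2 / 2). This is
# the baseline SDE, NOT the paper's double-randomized scheme; h, gamma, and
# the target are assumptions.
import numpy as np

def grad_U(x):
    return x  # gradient of the quadratic potential U(x) = ||x||^2 / 2

def underdamped_langevin(dim=10, h=0.01, gamma=2.0, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x, v = np.zeros(dim), np.zeros(dim)                  # position and velocity
    samples = []
    for t in range(n_steps):
        x = x + h * v                                    # position update
        v = (v - h * (gamma * v + grad_U(x))             # velocity drift
             + np.sqrt(2.0 * gamma * h) * rng.normal(size=dim))  # injected noise
        if t >= n_steps // 2:           # keep the second half as samples
            samples.append(x.copy())
    return np.asarray(samples)

samples = underdamped_langevin()
print(np.mean(samples ** 2))  # should be near 1.0 for a standard Gaussian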





Instance-Optimal Private Density Estimation in the Wasserstein Distance

Neural Information Processing Systems

Estimating the density of a distribution from samples is a fundamental problem in statistics. In many practical settings, the Wasserstein distance is an appropriate error metric for density estimation. For example, when estimating population densities in a geographic region, a small Wasserstein distance means that the estimate captures roughly where the population mass lies. In this work we study differentially private density estimation in the Wasserstein distance. We design and analyze instance-optimal algorithms for this problem that can adapt to easy instances.
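
As a concrete illustration of the problem setting (not the paper's instance-optimal algorithm), the sketch below builds a generic epsilon-differentially-private histogram by adding Laplace noise to bin counts, then measures its error in the 1-Wasserstein distance, which in one dimension equals the L1 distance between CDFs. The grid, sample size, and privacy parameter are assumptions.

```python
# A minimal sketch of the problem setting: a generic epsilon-DP histogram
# estimator (Laplace noise on bin counts) evaluated in the 1-Wasserstein
# distance. This is a simple baseline, NOT the paper's instance-optimal
# algorithm; the grid, sample size, and epsilon are assumptions.
import numpy as np

def dp_histogram(samples, bins, eps=1.0, rng=None):
    """Return an eps-DP probability vector over the given bins."""
    rng = rng or np.random.default_rng(0)
    counts, _ = np.histogram(samples, bins=bins)
    # Replacing one sample changes two bin counts by 1 each, so the L1
    # sensitivity is 2 and Laplace(2/eps) noise gives eps-DP.
    noisy = counts + rng.laplace(scale=2.0 / eps, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)        # project back to nonnegative
    return noisy / noisy.sum()

def w1_on_grid(p, q, bin_width):
    """Wasserstein-1 distance between distributions on an evenly spaced grid:
    in 1D it is the L1 norm of the CDF difference."""
    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * bin_width

rng = np.random.default_rng(1)
samples = rng.normal(size=2000)
bins = np.linspace(-4.0, 4.0, 81)            # 80 bins of width 0.1
empirical, _ = np.histogram(samples, bins=bins)
empirical = empirical / empirical.sum()
private = dp_histogram(samples, bins, eps=1.0, rng=rng)
print(w1_on_grid(empirical, private, bin_width=0.1))
```

The gap printed here is the privacy cost of this naive baseline; the abstract's point is that instance-optimal algorithms can do better on "easy" distributions, adapting the error to the instance rather than paying a worst-case rate.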